
    A review of digital forensics methods for JPEG file carving

    Digital forensics is an important field of cybersecurity and digital crime investigation. It entails applying file recovery methods to analyze data from storage media and extract hidden, deleted, or overwritten files. The recovery process may be complicated by unallocated partitions, blocks, or clusters and by the absence of file system metadata. Such cases require advanced recovery methods with carving capabilities. File carving methods comprise different techniques to identify, validate, and reassemble files. This paper presents a comprehensive study of data recovery, file carving, and file reassembly. It focuses on identifying and recovering JPEG images, as they are widely covered in the literature. It classifies carving techniques into three types: signature-, structure-, and content-based carvers. Subsequently, the paper reviews seven advanced carving methods from the literature. Finally, the paper presents a number of research gaps and concludes with a number of possible improvements. Generally, both the gaps and the possible improvements are associated with the fragmentation problem of data files.
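
    As a concrete illustration of the simplest of the three carver types, the sketch below shows signature-based carving: scanning a raw disk image for the JPEG start-of-image and end-of-image markers and extracting the bytes in between. It assumes contiguous, non-fragmented files; the file name and size limit are illustrative, not from the review.

```python
# Minimal sketch of signature-based JPEG carving over a raw disk image.
# Real carvers must also validate structure/content and handle fragmentation.

SOI = b"\xff\xd8\xff"   # JPEG start-of-image signature
EOI = b"\xff\xd9"       # JPEG end-of-image signature

def carve_jpegs(image_path, max_size=10 * 1024 * 1024):
    """Yield (offset, bytes) of candidate JPEG files found in a raw image."""
    with open(image_path, "rb") as f:
        data = f.read()
    start = data.find(SOI)
    while start != -1:
        end = data.find(EOI, start + len(SOI))
        if end == -1:
            break
        candidate = data[start:end + len(EOI)]
        if len(candidate) <= max_size:      # crude sanity check on size
            yield start, candidate
        start = data.find(SOI, end + len(EOI))

# Usage (hypothetical disk image file name):
# for offset, blob in carve_jpegs("disk.dd"):
#     open(f"carved_{offset:#x}.jpg", "wb").write(blob)
```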

    Consistency check between XML schema and class diagram for document versioning

    A consistency check between design and implementation is usually done in order to verify the correctness of the system's requirements. However, if the requirements change over time, document versioning occurs within the requirements. For XML Schema, document versioning exists when the XML Schema changes from its previous version. In order to detect the versioning between two XML Schemas, a consistency check needs to be performed on the class diagrams produced from both Schemas. The consistency between two XML Schemas is checked based on transformation rules and versioning rules. Transformation rules are used to translate the XML Schema into a class diagram, and versioning rules are used to check for the existence of document changes between the two XML Schemas. Once the two XML Schemas differ, the consistency rules are used for the consistency check. This paper presents an approach based on transformation rules and versioning rules to check the consistency between an XML Schema and a UML class diagram when document versioning exists. The approach is then applied to a case study to show how consistency is checked in order to detect the versioning of two different XML Schemas. Based on the case study, the approach shows that two XML Schemas can be checked for consistency when document versioning exists.
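
    A rough sketch of the idea follows: translate each Schema's complex types into a class-like mapping (a stand-in for the transformation rules) and compare the two mappings to detect versioning. The rule set and the schema file names are illustrative, not the paper's full rules.

```python
# Sketch: map complexTypes to "classes" and diff two schema versions.
import xml.etree.ElementTree as ET

XS = "{http://www.w3.org/2001/XMLSchema}"

def schema_to_classes(xsd_path):
    """Map each complexType name to the sorted names of its elements."""
    root = ET.parse(xsd_path).getroot()
    classes = {}
    for ctype in root.iter(f"{XS}complexType"):
        name = ctype.get("name", "anonymous")
        classes[name] = sorted(e.get("name", "") for e in ctype.iter(f"{XS}element"))
    return classes

def check_versioning(old_xsd, new_xsd):
    """Report classes added, removed, or changed between two schema versions."""
    old, new = schema_to_classes(old_xsd), schema_to_classes(new_xsd)
    return {
        "added_classes": sorted(set(new) - set(old)),
        "removed_classes": sorted(set(old) - set(new)),
        "changed_classes": sorted(c for c in set(old) & set(new) if old[c] != new[c]),
    }

# Usage (hypothetical schema file names):
# print(check_versioning("order_v1.xsd", "order_v2.xsd"))
```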

    The architecture of smart city in Malaysia by utilizing IOTs

    Over the past few years, smart city components have been extensively discussed, in particular which components are necessary in order to build a smart city. This paper discusses the components identified as necessary for building the architecture of a smart city in Malaysia. The architecture for the smart city has been designed based on the utilization of the Internet of Things (IoT).

    Validation of requirements for transformation of an urban district to a smart city

    The concept of a smart city is still debatable, yet it draws the attention of every country around the globe seeking to provide its community with a better quality of life. New ideas for the development of a smart city continually evolve to enhance the quality, performance, and interactivity of services. This paper presents a model of a smart city based on a comparison of selected smart cities in the world and uses the model to validate the requirements for the transformation of an urban district into a smart city. The proposed model focuses on two major components: utilizing the Internet of Things (IoT) in forming a model for a smart city and incorporating cultural diversity. The relationships among the components and the influence of culture are the foundation of the design of the smart city model. In this research, the model has been validated against the requirements analysis from the survey instrument, and the results show that the average mean of each element used is more than 4 out of 5. The model of a smart city can be used as a guideline for the transformation of an urban district into a smart city.

    Comparing the Legendre wavelet filter and the Gabor wavelet filter for feature extraction based on iris recognition system

    The iris recognition system is today among the most reliable forms of biometric recognition. Among the reasons for its reliability are that the iris never changes due to ageing and that an individual can be recognized by the iris from long distances, up to 50 m away. The iris recognition process includes four main steps: iris image acquisition, preprocessing, feature extraction, and matching, which together recognize an individual by his or her iris. Most researchers regard feature extraction as a critical stage in the recognition process, since it is tasked with extracting the unique features of the individual to be recognized. Over the past two decades, different algorithms have been proposed to extract features from the iris. This research considers the Gabor filter, which is one of the most widely used, and the Legendre wavelet filter. We apply them to three different datasets: the CASIA, UBIRIS, and MMU databases. We then evaluate and compare them based on the False Acceptance Rate (FAR), False Rejection Rate (FRR), Genuine Acceptance Rate (GAR), and their accuracy. The results show a significant increase in recognition accuracy of the Legendre wavelet filter over the Gabor filter, with up to a 5.4% difference on the UBIRIS database.
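
    For reference, the comparison metrics can be computed from match scores as sketched below, assuming lower scores (e.g. Hamming distances between iris codes) indicate a better match. The threshold and score values in the usage line are made up, not results from the paper.

```python
# Sketch: FAR, FRR, GAR and accuracy from genuine/impostor match scores.
import numpy as np

def recognition_metrics(genuine_scores, impostor_scores, threshold):
    """Return FAR, FRR, GAR and accuracy for a given decision threshold."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    far = np.mean(impostor <= threshold)    # impostors wrongly accepted
    frr = np.mean(genuine > threshold)      # genuine users wrongly rejected
    gar = 1.0 - frr                         # genuine acceptance rate
    correct = np.sum(genuine <= threshold) + np.sum(impostor > threshold)
    accuracy = correct / (genuine.size + impostor.size)
    return far, frr, gar, accuracy

# Usage with made-up scores and threshold:
# far, frr, gar, acc = recognition_metrics([0.22, 0.31, 0.28], [0.47, 0.52, 0.40], 0.35)
```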

    Statistical analysis, ciphertext only attack, improvement of generic quasigroup string transformation and dynamic string transformation

    Algebraic functions are the primitives that strengthen cryptographic algorithms to ensure the confidentiality of data and information. There is a need for the continuous development of new primitives and the improvement of existing ones. Quasigroup string transformation is one of those primitives with many applications in cryptographic algorithms, hash functions, and pseudorandom number generators. Randomness and unpredictability are requirements of every cryptographic primitive. Many of these string transformations have neither been implemented properly nor undergone security analysis, and cryptanalysis of existing schemes is as important as building new ones. In this paper, the generic quasigroup string transformation is analyzed and found vulnerable to a ciphertext-only attack: an adversary can work back from the ciphertext to the plaintext without prior knowledge of the plaintext, and pseudorandom numbers produced with the generic string transformation can be reversed to the original input with little effort. The generic quasigroup string transformation is therefore compared with a recently introduced string transformation, which is expected to provide better randomness and resistance to the ciphertext-only attack. The proposed string transformation is suitable for one-way functions such as hash functions and pseudorandom number generators, mitigating the vulnerability of the quasigroup string transformation to the ciphertext-only attack, while the dynamic string transformation increases the difficulty of predicting the substitution table used. The algorithms are compared in terms of randomness using the NIST statistical test suite, correlation assessment, and frequency distribution.
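
    For orientation, the sketch below shows the generic quasigroup e-transformation being analyzed: each output symbol is the quasigroup product of the previous output (seeded by a leader) and the current input symbol. The 4x4 Latin square used as the quasigroup is purely illustrative and is not taken from the paper.

```python
# Sketch of a generic quasigroup string e-transformation.
import numpy as np

Q = np.array([[2, 1, 0, 3],     # a small Latin square: Q[a][b] = a * b
              [1, 3, 2, 0],
              [3, 0, 1, 2],
              [0, 2, 3, 1]])

def e_transform(leader, symbols):
    """Transform a symbol sequence: b_i = b_{i-1} * a_i, with b_0 = leader."""
    out, prev = [], leader
    for s in symbols:
        prev = Q[prev][s]
        out.append(int(prev))
    return out

# Usage with an illustrative leader and input over the alphabet {0, 1, 2, 3}:
# print(e_transform(1, [0, 2, 3, 1, 1, 0]))
```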

    An efficient iris image thresholding based on binarization threshold in black hole search method

    In an iris recognition system, segmentation is one of the most important stages: the iris is located and then further segmented into the outer and lower boundaries of the iris region. Several algorithms have been proposed to segment the outer and lower boundaries of the iris region. The aim of this research is to identify suitable threshold values for locating the outer and lower boundaries using the Black Hole Search Method. We chose this method because of the inefficiency of other methods in image identification and verification. The experiment was conducted using three datasets, UBIRIS, CASIA, and MMU, because of their superiority over others. Given that different iris databases have different file formats and quality, the images used for this work are in JPEG and BMP formats. Based on the experiments, the most suitable threshold values for identifying the iris boundaries in the different iris databases have been identified. Compared with the methods used by other researchers, the threshold values of 0.3, 0.4, and 0.1 for the UBIRIS, CASIA, and MMU databases, respectively, are found to be more accurate and comprehensive. The study concludes that threshold values vary depending on the database.
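
    The binarization step can be sketched as follows: normalize the grayscale iris image, keep only pixels darker than the threshold (the pupil is the darkest region, hence "black hole"), and estimate the pupil centre from those pixels. The image array in the usage line is assumed, not taken from the datasets; only the threshold values above come from the paper.

```python
# Sketch of binarization thresholding for pupil ("black hole") localization.
import numpy as np

def locate_pupil(gray_image, threshold):
    """Return a binary pupil mask and its centroid for a grayscale image."""
    norm = gray_image.astype(float) / 255.0
    mask = norm <= threshold                 # dark pixels below the threshold
    if not mask.any():
        return mask, None
    rows, cols = np.nonzero(mask)
    centre = (rows.mean(), cols.mean())      # rough pupil centre estimate
    return mask, centre

# Usage with a made-up image and the UBIRIS threshold reported above:
# mask, centre = locate_pupil(np.random.randint(0, 256, (280, 320)), 0.3)
```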

    Extreme learning machine classification of file clusters for evaluating content-based feature vectors

    In digital forensic investigation, and in missing data file retrieval in general, there is a challenge in recovering files whose file system information is missing. The recovery process entails applying a number of methods to determine the type, contents, and structure of each data file's clusters, such as JPEG, DOC, ZIP, or TXT. This paper studies the effect of three content-based feature extraction methods on improving the classification of JPEG file clusters. The methods are Byte Frequency Distribution, Entropy, and Rate of Change. An Extreme Learning Machine (ELM) neural network is then used to evaluate the performance of the three methods by classifying the feature vectors as JPEG or non-JPEG for files in different file formats. The files are allocated in a continuous series of clusters. The ELM algorithm is applied to the DFRWS (2006) dataset, and the results show that the combination of the three methods produces 93.46% classification accuracy.
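
    A minimal sketch of the three content-based features for a single cluster (e.g. a 512-byte block) is shown below; the resulting vector would then be fed to a classifier such as ELM, which is not shown here. The cluster in the usage line is made up.

```python
# Sketch: Byte Frequency Distribution, entropy, and rate of change per cluster.
import numpy as np

def cluster_features(cluster: bytes):
    """Return (BFD histogram, entropy, rate of change) for one cluster."""
    data = np.frombuffer(cluster, dtype=np.uint8)
    # Byte Frequency Distribution: normalized histogram over the 256 byte values.
    bfd = np.bincount(data, minlength=256) / data.size
    # Shannon entropy of the byte distribution, in bits per byte.
    probs = bfd[bfd > 0]
    entropy = -np.sum(probs * np.log2(probs))
    # Rate of Change: mean absolute difference between consecutive bytes.
    roc = np.mean(np.abs(np.diff(data.astype(int))))
    return bfd, float(entropy), float(roc)

# Usage on a made-up cluster:
# bfd, h, roc = cluster_features(bytes(range(256)) * 2)
```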

    Key generation technique based on triangular coordinate extraction for hybrid cubes

    Cryptographic algorithms play an important role in information security, ensuring the security of data across networks and in storage. The generation of Hybrid Cubes (HC), based on permutations and combinations of integer numbers, is utilized in the construction of encryption and decryption keys in a non-binary block cipher. In this study, we extend the hybrid cube encryption algorithm (HiSea) and our earlier Triangular Coordinate Extraction (TCE) technique for HC by increasing the complexity of the mathematical approach. We propose a new key generation technique based on TCE for the security of data. In this technique, the Hybrid Cube surface (HCs) is divided into four quarters by the intersection of the primary and secondary diagonals, and each quarter is rotated using rotation points. The rotation of the HCs improves the overall security of the HC and enhances the complexity of the key schedule algorithm design. Brute force and entropy tests are applied in the experiments, and the results show that the proposed technique is suitable for implementing key generation and is free from any predictable key patterns.
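
    One step of the description above can be sketched as partitioning an n x n cube surface into four triangular quarters by its primary and secondary diagonals. The paper's specific rotation points and key schedule are not reproduced here; this only illustrates the partitioning that the rotation would operate on, with an assumed square integer matrix standing in for the HC surface.

```python
# Sketch: partition a square surface into four quarters by its two diagonals.
import numpy as np

def diagonal_quarters(surface):
    """Return boolean masks for the top, right, bottom and left quarters."""
    n = surface.shape[0]
    i, j = np.indices((n, n))
    above_primary = i < j             # above the main diagonal
    above_secondary = i + j < n - 1   # above the anti-diagonal
    top = above_primary & above_secondary
    right = above_primary & ~above_secondary
    bottom = ~above_primary & ~above_secondary
    left = ~above_primary & above_secondary
    return top, right, bottom, left

# Usage with an illustrative 4x4 surface of integers:
# s = np.arange(1, 17).reshape(4, 4)
# top, right, bottom, left = diagonal_quarters(s)
# print(s[top], s[right], s[bottom], s[left])
```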